60 research outputs found
SemanticSLAM: Learning based Semantic Map Construction and Robust Camera Localization
Current techniques in Visual Simultaneous Localization and Mapping (VSLAM)
estimate camera displacement by comparing image features of consecutive scenes.
These algorithms depend on scene continuity and hence require frequent camera
input. However, processing images frequently can lead to significant memory
usage and computation overhead. In this study, we introduce SemanticSLAM, an
end-to-end visual-inertial odometry system that utilizes semantic features
extracted from an RGB-D sensor. This approach enables the creation of a
semantic map of the environment and ensures reliable camera localization.
SemanticSLAM is scene-agnostic, which means it doesn't require retraining for
different environments. It operates effectively in indoor settings, even with
infrequent camera input, without prior knowledge. The strength of SemanticSLAM
lies in its ability to gradually refine the semantic map and improve pose
estimation. This is achieved by a convolutional long-short-term-memory
(ConvLSTM) network, trained to correct errors during map construction. Compared
to existing VSLAM algorithms, SemanticSLAM improves pose estimation by 17%. The
resulting semantic map provides interpretable information about the environment
and can be easily applied to various downstream tasks, such as path planning,
obstacle avoidance, and robot navigation. The code will be publicly available
at https://github.com/Leomingyangli/SemanticSLAM. Comment: 2023 IEEE Symposium
Series on Computational Intelligence (SSCI), 6 pages
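The abstract above attributes map refinement and pose correction to a ConvLSTM network. As a rough illustration of the mechanism only (the paper's architecture, channel counts, and training setup are not given in the abstract), a single ConvLSTM cell update over a semantic feature grid can be sketched in plain NumPy:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def convlstm_step(x, h, c, Wx, Wh, b):
    """One ConvLSTM step with 3x3 'same' convolutions (stride 1).

    x: (C_in, H, W) input semantic features
    h, c: (C_hid, H, W) hidden and cell state
    Wx: (4*C_hid, C_in, 3, 3), Wh: (4*C_hid, C_hid, 3, 3), b: (4*C_hid,)
    """
    def conv(inp, W):
        c_out, c_in, k, _ = W.shape
        hh, ww = inp.shape[1:]
        pad = np.pad(inp, ((0, 0), (1, 1), (1, 1)))
        out = np.zeros((c_out, hh, ww))
        for o in range(c_out):
            for i in range(c_in):
                for di in range(k):
                    for dj in range(k):
                        out[o] += W[o, i, di, dj] * pad[i, di:di + hh, dj:dj + ww]
        return out

    gates = conv(x, Wx) + conv(h, Wh) + b[:, None, None]
    i, f, o, g = np.split(gates, 4, axis=0)   # input, forget, output, candidate
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c_new = f * c + i * g                     # gated cell-state update
    h_new = o * np.tanh(c_new)                # new hidden map estimate
    return h_new, c_new
```

Stacking such steps over successive (infrequent) observations is what lets the hidden state accumulate and gradually correct the map, rather than recomputing it from each frame.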
Neuromorphic Online Learning for Spatiotemporal Patterns with a Forward-only Timeline
Spiking neural networks (SNNs) are bio-plausible computing models with high
energy efficiency. The temporal dynamics of neurons and synapses enable them to
detect temporal patterns and generate sequences. While Backpropagation Through
Time (BPTT) is traditionally used to train SNNs, it is not suitable for online
learning of embedded applications due to its high computation and memory cost
as well as extended latency. Previous works have proposed online learning
algorithms, but they often utilize highly simplified spiking neuron models
without synaptic dynamics and reset feedback, resulting in subpar performance.
In this work, we present Spatiotemporal Online Learning for Synaptic Adaptation
(SOLSA), specifically designed for online learning of SNNs composed of Leaky
Integrate and Fire (LIF) neurons with exponentially decayed synapses and soft
reset. The algorithm not only learns the synaptic weights but also adapts the
temporal filters associated with the synapses. Compared to the BPTT algorithm,
SOLSA has much lower memory requirement and achieves a more balanced temporal
workload distribution. Moreover, SOLSA incorporates enhancement techniques such
as scheduled weight update, early stop training and adaptive synapse filter,
which speed up the convergence and enhance the learning performance. When
compared to other non-BPTT-based SNN learning algorithms, SOLSA demonstrates an average
learning accuracy improvement of 14.2%. Furthermore, compared to BPTT, SOLSA
achieves a 5% higher average learning accuracy with a 72% reduction in memory
cost. Comment: 9 pages, 8 figures
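As a toy illustration of the neuron model SOLSA targets, a LIF neuron with an exponentially decayed synaptic filter and soft reset, the forward dynamics can be simulated as below. The parameter values are illustrative, and the learning rule itself is not shown:

```python
import numpy as np

def lif_simulate(spikes_in, w, tau_syn=5.0, tau_mem=10.0, v_th=1.0, dt=1.0):
    """Simulate one LIF neuron with exponentially decayed synaptic
    currents and soft reset (subtract threshold, keep the residual).

    spikes_in: (T, n_in) binary input spike trains
    w: (n_in,) synaptic weights
    """
    T, n_in = spikes_in.shape
    alpha = np.exp(-dt / tau_syn)   # synaptic filter decay per step
    beta = np.exp(-dt / tau_mem)    # membrane leak per step
    i_syn = np.zeros(n_in)
    v = 0.0
    out = np.zeros(T, dtype=int)
    for t in range(T):
        i_syn = alpha * i_syn + spikes_in[t]   # exponential synapse filter
        v = beta * v + np.dot(w, i_syn)        # leaky integration
        if v >= v_th:
            out[t] = 1
            v -= v_th                          # soft reset: residual preserved
    return out
```

Note that both the synapse state `i_syn` and the soft-reset residual carry information across time steps; this is exactly the temporal state that the simplified neuron models criticized in the abstract discard.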
Exploiting Neuron and Synapse Filter Dynamics in Spatial Temporal Learning of Deep Spiking Neural Network
The recently discovered spatial-temporal information processing capability of
bio-inspired spiking neural networks (SNNs) has enabled some interesting models
and applications. However, designing large-scale, high-performance models
remains a challenge due to the lack of robust training algorithms. A bio-plausible
SNN model with spatial-temporal property is a complex dynamic system. Each
synapse and neuron behave as filters capable of preserving temporal
information. Because neuron dynamics and filter effects are ignored in existing
training algorithms, the SNN degrades into a memoryless system and loses its
ability to process temporal signals. Furthermore, spike timing plays an
important role in information representation, but conventional rate-based spike
coding models only treat spike trains statistically and discard the information
carried by their temporal structure. To address these issues and exploit
the temporal dynamics of SNNs, we formulate SNN as a network of infinite
impulse response (IIR) filters with neuron nonlinearity. We propose a training
algorithm capable of learning spatial-temporal patterns by searching for
the optimal synapse filter kernels and weights. The proposed model and training
algorithm are applied to construct associative memories and classifiers for
synthetic and public datasets including MNIST, NMNIST, and DVS 128; their
accuracy outperforms state-of-the-art approaches.
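The IIR formulation above can be made concrete with the simplest case: an exponentially decaying synaptic current is exactly the impulse response of the first-order IIR recurrence y[t] = a·y[t-1] + x[t]. A minimal sketch, where the coefficient `a` is a free time-constant parameter rather than a value from the paper:

```python
import numpy as np

def iir_first_order(x, a):
    """First-order IIR filter y[t] = a*y[t-1] + x[t].

    Its impulse response is a**t: an exponentially decaying trace,
    i.e. the memory a synapse keeps of each incoming spike."""
    y = np.zeros(len(x), dtype=float)
    acc = 0.0
    for t, xt in enumerate(x):
        acc = a * acc + xt
        y[t] = acc
    return y
```

Learning the filter kernels, as the abstract describes, then amounts to optimizing coefficients like `a` jointly with the synaptic weights instead of fixing them by hand.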
A Simulation Framework for Fast Design Space Exploration of Unmanned Air System Traffic Management Policies
The number of daily small Unmanned Aircraft Systems (sUAS) operations in
uncontrolled low altitude airspace is expected to reach into the millions. UAS
Traffic Management (UTM) is an emerging concept aiming at the safe and
efficient management of such very dense traffic, but few studies address
the policies to accommodate such demand and the required ground infrastructure
in suburban or urban environments. Searching for the optimal air traffic
management policy is a combinatorial optimization problem with intractable
complexity when the number of sUAS and the constraints increase. As the
demands on the airspace grow and traffic patterns become more complicated, it is
difficult to forecast the potential low altitude airspace hotspots and the
corresponding ground resource requirements. This work presents a Multi-agent
Air Traffic and Resource Usage Simulation (MATRUS) framework that aims for fast
evaluation of different air traffic management policies and the relationship
between policy, environment and resulting traffic patterns. It can also be used
as a tool to decide the resource distribution and launch site location in the
planning of a next-generation smart city. As a case study, detailed comparisons
are provided for sUAS flight time, conflict ratio, and cellular communication
resource usage between a managed (centrally coordinated) and an unmanaged (free
flight) traffic scenario. Comment: The Integrated Communications Navigation and
Surveillance (ICNS) Conference in 201
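One of the metrics compared in the case study, the conflict ratio, can be sketched for grid-based trajectories as follows. The Chebyshev distance threshold and the grid abstraction are assumptions for illustration, not the MATRUS implementation:

```python
from itertools import combinations

def conflict_ratio(trajectories, min_sep=1):
    """Fraction of flown time steps involved in a conflict.

    trajectories: list of per-sUAS paths, each a list of (x, y) grid
    cells indexed by time step. A conflict is any pair of sUAS closer
    than min_sep cells (Chebyshev distance) at the same time step."""
    conflicts = 0
    horizon = max(len(tr) for tr in trajectories)
    for t in range(horizon):
        # positions of all sUAS still airborne at time t
        pos = [tr[t] for tr in trajectories if t < len(tr)]
        for (x1, y1), (x2, y2) in combinations(pos, 2):
            if max(abs(x1 - x2), abs(y1 - y2)) < min_sep:
                conflicts += 1
    total_steps = sum(len(tr) for tr in trajectories)
    return conflicts / total_steps
```

A managed policy would reroute or delay launches to keep this ratio near zero, at the cost of longer flight times; evaluating that trade-off quickly is the framework's stated purpose.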
A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning
Automatic decision-making approaches, such as reinforcement learning (RL),
have been applied to (partially) solve the resource allocation problem
adaptively in the cloud computing system. However, a complete cloud resource
allocation framework exhibits high dimensions in state and action spaces, which
prohibit the usefulness of traditional RL techniques. In addition, high power
consumption has become one of the critical concerns in design and control of
cloud computing systems, which degrades system reliability and increases
cooling cost. An effective dynamic power management (DPM) policy should
minimize power consumption while maintaining performance degradation within an
acceptable level. Thus, a joint virtual machine (VM) resource allocation and
power management framework is critical to the overall cloud computing system.
Moreover, a novel solution framework is necessary to address the even higher
dimensions in state and action spaces. In this paper, we propose a novel
hierarchical framework for solving the overall resource allocation and power
management problem in cloud computing systems. The proposed hierarchical
framework comprises a global tier for VM resource allocation to the servers and
a local tier for distributed power management of local servers. The emerging
deep reinforcement learning (DRL) technique, which can deal with complicated
control problems with large state space, is adopted to solve the global tier
problem. Furthermore, an autoencoder and a novel weight sharing structure are
adopted to handle the high-dimensional state space and accelerate the
convergence speed. On the other hand, the local tier of distributed server
power management comprises an LSTM-based workload predictor and a model-free
RL-based power manager, operating in a distributed manner. Comment: accepted by
the 37th IEEE International Conference on Distributed Computing (ICDCS 2017).
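As a heavily simplified sketch of the local tier's model-free RL power manager: the paper pairs it with an LSTM workload predictor and a much richer state/action space, whereas the two-state tabular setup and cost numbers below are illustrative only:

```python
import random
import numpy as np

def train_power_manager(episodes=4000, alpha=0.1, gamma=0.9, seed=0):
    """Tabular Q-learning toy for a local-tier power manager.

    state  = predicted workload level (0 = low, 1 = high)
    action = power mode (0 = sleep, 1 = active)
    Reward trades static power cost against a latency penalty for
    sleeping while busy (costs are made up for illustration)."""
    rng = random.Random(seed)
    Q = np.zeros((2, 2))
    for _ in range(episodes):
        s = rng.randint(0, 1)
        a = rng.randint(0, 1)                 # exploratory (off-policy) action
        power = 1.0 if a == 1 else 0.2        # active mode burns more power
        latency = 2.0 if (s == 1 and a == 0) else 0.0
        r = -(power + latency)
        s2 = rng.randint(0, 1)                # workload evolves exogenously here
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    return Q
```

Even this toy recovers the intended DPM policy (sleep when the predicted workload is low, stay active when it is high); the paper's contribution is making the same idea work at scale, with the global DRL tier handling VM placement above it.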